Surgical scene segmentation is critical for enabling cognitive assistance in robotic surgery. However, annotating videos pixel-wise in a frame-by-frame manner is expensive and time-consuming. To greatly reduce the labeling burden, in this work we study semi-supervised scene segmentation from robotic surgical videos, which is practically essential yet rarely explored before. We consider a clinically suitable annotation situation under equidistant sampling. We then propose PGV-CL, a novel pseudo-label guided cross-video contrastive learning method to boost scene segmentation. It effectively leverages unlabeled data for trustworthy and global model regularization, producing more discriminative feature representations. Specifically, for trustworthy representation learning, we propose incorporating pseudo labels to guide pair selection, obtaining more reliable representation pairs for pixel contrast. Moreover, we expand the representation learning space from the image level of previous works to cross-video, which can capture global semantics to benefit the learning process. We extensively evaluate our method on the public robotic surgery dataset EndoVis18 and the public cataract surgery dataset CaDIS. Experimental results demonstrate the effectiveness of our method, consistently outperforming state-of-the-art semi-supervised methods under different labeling ratios, and even surpassing fully supervised training on EndoVis18 with 10.1% labeling.
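As a rough illustration of the core idea, a minimal sketch (not the authors' implementation) of pseudo-label guided pixel contrast follows: only pixels whose softmax confidence clears a threshold contribute, and an InfoNCE-style loss pulls same-class pixel embeddings together against a bank of embeddings sampled from other videos. The 0.75 threshold, the memory-bank interface, and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pixel_contrast_loss(embed, logits, cross_video_bank, bank_labels,
                        conf_thresh=0.75, temperature=0.1):
    """embed: (N, D) pixel embeddings from unlabeled frames.
    logits: (N, C) segmentation logits for the same pixels.
    cross_video_bank: (M, D) embeddings sampled from other videos.
    bank_labels: (M,) their (pseudo) class labels."""
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)
    keep = conf > conf_thresh                   # trust only confident pixels
    anchors = F.normalize(embed[keep], dim=1)
    labels = pseudo[keep]
    bank = F.normalize(cross_video_bank, dim=1)
    sim = anchors @ bank.t() / temperature      # (n, M) similarities
    pos_mask = labels[:, None] == bank_labels[None, :]
    # InfoNCE over positives sharing the same pseudo class
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```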
Automatic surgical scene segmentation is fundamental for facilitating cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which only exploit local context. In this paper, we propose a novel framework, STswinCL, that explores complementary intra- and inter-video relations to boost segmentation performance by progressively capturing global context. We first develop a hierarchical Transformer to capture intra-video relations, incorporating rich spatial and temporal cues from neighboring pixels and previous frames. A joint space-time window shifting scheme is proposed to efficiently aggregate these two cues into each pixel embedding. We then explore inter-video relations via pixel-to-pixel contrastive learning, which well structures the global embedding space. A multi-source contrastive training objective is developed to group pixel embeddings across videos with ground-truth guidance, which is crucial for learning global properties of the whole dataset. We extensively validate our approach on two public surgical video benchmarks, including the EndoVis18 Challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/yuemingjin/stswincl.
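The joint space-time window shifting can be pictured as below. This is a rough sketch under assumed (B, T, H, W, C) feature shapes and toy window/shift sizes, not the released code (see the repository linked above for the actual implementation): the feature map is rolled along both temporal and spatial axes, then partitioned into 3D windows so that self-attention inside each window mixes cues from neighboring pixels and previous frames.

```python
import torch

def shifted_st_windows(x, window=(2, 7, 7), shift=(1, 3, 3)):
    """x: (B, T, H, W, C) video features -> (num_windows*B, t*h*w, C)."""
    t, h, w = window
    # roll jointly in time and space so windows straddle prior partitions
    x = torch.roll(x, shifts=(-shift[0], -shift[1], -shift[2]), dims=(1, 2, 3))
    B, T, H, W, C = x.shape
    x = x.view(B, T // t, t, H // h, h, W // w, w, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, t * h * w, C)
    return x  # feed each window's tokens to multi-head self-attention

x = torch.randn(1, 4, 14, 14, 96)   # toy (B, T, H, W, C) input
tokens = shifted_st_windows(x)
print(tokens.shape)                  # (8, 98, 96): 8 windows of 2*7*7 tokens
```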
Knee osteoarthritis (OA) is the most common form of osteoarthritis and a leading cause of disability. Cartilage defects are regarded as a major manifestation of knee OA and are visible via magnetic resonance imaging (MRI). Early detection and assessment of knee cartilage defects is therefore important for protecting patients from knee OA. To this end, many attempts have been made at knee cartilage defect assessment by applying convolutional neural networks (CNNs) to knee MRI. However, the physiological characteristics of cartilage may hinder such efforts: cartilage is a thin curved layer, meaning that only a small fraction of voxels in a knee MRI can contribute to cartilage defect assessment; heterogeneous scanning protocols further challenge the feasibility of CNNs in clinical practice; and CNN-based knee cartilage assessment results lack interpretability. To address these challenges, we model the structure and appearance of cartilage from knee MRI as a graph representation, which is capable of handling highly diverse clinical data. Then, guided by the cartilage graph representation, we design a non-Euclidean deep learning network with a self-attention mechanism to extract cartilage features both locally and globally, and derive the final assessment with interpretable visualized results. Our comprehensive experiments show that the proposed method yields superior performance in knee cartilage defect assessment, along with convenient interpretability via 3D visualization.
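An illustrative sketch of the two ingredients, under simplifying assumptions: cartilage voxels become graph nodes (edges from spatial k-nearest neighbours for local structure) and one self-attention layer aggregates global context over the nodes. The k=8 neighbourhood, feature sizes, and head count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def knn_graph(coords, k=8):
    """coords: (N, 3) voxel positions -> (N, k) neighbour indices."""
    dist = torch.cdist(coords, coords)                      # pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-loop

class NodeSelfAttention(nn.Module):
    """Global (non-Euclidean) context via attention over all graph nodes."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, feats):
        """feats: (N, D) node features -> globally attended features."""
        x = feats.unsqueeze(0)              # (1, N, D): one graph as a batch
        out, _ = self.attn(x, x, x)
        return out.squeeze(0)

coords = torch.rand(500, 3)                 # toy cartilage voxel coordinates
feats = torch.rand(500, 64)                 # toy appearance descriptors
neigh = knn_graph(coords)                   # local edges for message passing
global_feats = NodeSelfAttention(64)(feats) # global context per node
```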
The hesitant fuzzy linguistic preference relation (HFLPR) is of interest because it provides an efficient way to express opinions under uncertainty. To enhance the theory of decision making with HFLPRs, this paper introduces an algorithm for group decision making with HFLPRs based on acceptable consistency and consensus measurements, which involves (1) defining a hesitant fuzzy linguistic geometric consistency index (HFLGCI) and proposing procedures for consistency checking and inconsistency repair of HFLPRs; and (2) measuring group consensus based on the similarity between the original individual HFLPRs and the overall perfect HFLPR, then establishing a consensus-reaching procedure that includes the determination of decision makers' weights. The convergence and monotonicity of the two proposed procedures are proven. Experiments are further conducted to study the critical values of the defined HFLGCI, and comparative analyses are performed to show the effectiveness of the proposed algorithm. A case study on the performance evaluation of venture capital guiding funds is given to illustrate the applicability of the proposed algorithm. As an application of our work, an online decision-making portal is finally provided for decision makers to solve decision-making problems with the proposed algorithm.
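A toy numeric sketch of the consensus-measuring step, under strong simplifying assumptions: each decision maker's preference relation is reduced to a plain numeric matrix, the overall "perfect" relation is their element-wise mean, and consensus is one minus the normalised mean deviation from it. The real HFLGCI operates on hesitant fuzzy linguistic terms; this only mirrors the shape of the procedure.

```python
import numpy as np

def consensus_degree(relations):
    """relations: list of (n, n) numeric preference matrices in [0, 1]."""
    group = np.mean(relations, axis=0)              # overall "perfect" relation
    devs = [np.abs(r - group).mean() for r in relations]
    return [1.0 - d for d in devs]                  # per-expert consensus in [0, 1]

experts = [np.array([[0.5, 0.7], [0.3, 0.5]]),
           np.array([[0.5, 0.6], [0.4, 0.5]])]
print(consensus_degree(experts))  # experts closer to the group mean score higher
```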
Background: The novel coronavirus disease 2019 (COVID-19) has spread widely around the world, posing a huge threat to people's living environment. Objective: Under computed tomography (CT) imaging, the structural features of COVID-19 lesions are complex and vary between cases. To accurately locate COVID-19 lesions and assist physicians in making the best diagnosis and treatment plan, a deeply supervised ensemble learning network is presented for COVID-19 lesion segmentation in CT images. Methods: Considering that a large number of COVID-19 CT images and corresponding lesion annotations are difficult to obtain, a transfer learning strategy is adopted to make up for the shortcoming and alleviate the overfitting problem. In practice, a conventional single deep learning framework struggles to extract COVID-19 lesion features effectively, which may cause some lesions to go undetected. To overcome this problem, a deeply supervised ensemble learning network is proposed that combines local and global features for COVID-19 lesion segmentation. Results: The performance of the proposed method was validated in experiments on a public dataset. Compared with manual annotations, the proposed method achieved a high intersection over union (IoU) of 0.7279. Conclusion: A deeply supervised ensemble learning network is presented for coronavirus pneumonia lesion segmentation in CT images. The effectiveness of the proposed method was verified by visual inspection and quantitative evaluation. Experimental results show that the proposed method performs strongly in COVID-19 lesion segmentation.
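For concreteness, here is a generic sketch of the deep-supervision idea and the reported IoU metric (the paper's exact network and loss weights are not specified here; the 0.4 auxiliary weight is an illustrative assumption): auxiliary segmentation heads at intermediate decoder scales are each supervised by a down-sampled copy of the mask, so gradients reach early layers directly.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(main_logits, aux_logits_list, mask, aux_weight=0.4):
    """main_logits: (B,1,H,W) final head; aux_logits_list: lower-res heads;
    mask: (B,1,H,W) float binary lesion mask."""
    loss = F.binary_cross_entropy_with_logits(main_logits, mask)
    for aux in aux_logits_list:
        small = F.interpolate(mask, size=aux.shape[-2:], mode="nearest")
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux, small)
    return loss

def iou(pred, mask, eps=1e-6):
    """Intersection over union, the measure reported (0.7279) above."""
    pred, mask = pred.bool(), mask.bool()
    return ((pred & mask).sum() + eps) / ((pred | mask).sum() + eps)
```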
In traditional Visual Question Generation (VQG), most images have multiple concepts (e.g., objects and categories) for which a question could be generated, but models are trained to mimic the arbitrary choice of concept given in the training data. This makes training difficult and also poses issues for evaluation: multiple valid questions exist for most images, but only one or a few are captured by the human references. We present Guided Visual Question Generation, a variant of VQG that conditions the question generator on categorical information about the question type and the objects it should explore. We propose two variants: (i) an explicitly guided model that enables an actor (human or automated) to select which objects and categories to generate questions for; and (ii) an implicitly guided model based on discrete latent variables that learns which objects and categories to condition on. Evaluating the proposed models on the answer-category augmented VQA dataset, our quantitative results show a substantial improvement over the state of the art (an increase of over 9 BLEU-4). Human evaluation validates that guidance helps generate questions that are grammatically coherent and relevant to the given images and objects.
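A sketch of the core trick behind the implicitly guided variant, as described above: a discrete latent variable selects which detected object to condition on, trained end-to-end via the Gumbel-softmax relaxation. The scoring layer, feature sizes, and 36-region input are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentObjectSelector(nn.Module):
    """Discrete latent choice over detected objects, relaxed for backprop."""
    def __init__(self, obj_dim=256):
        super().__init__()
        self.score = nn.Linear(obj_dim, 1)

    def forward(self, obj_feats, tau=1.0):
        """obj_feats: (B, K, D) detected-object features -> (B, D) selection."""
        logits = self.score(obj_feats).squeeze(-1)          # (B, K)
        sel = F.gumbel_softmax(logits, tau=tau, hard=True)  # one-hot sample
        return (sel.unsqueeze(-1) * obj_feats).sum(dim=1)   # chosen object

objs = torch.randn(2, 36, 256)           # e.g. 36 region features per image
picked = LatentObjectSelector()(objs)    # conditions the question decoder
```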
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and high consistency with human visual perception. For temporal artifacts, the self-attention based TimeSFormer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
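A schematic sketch of how a saliency-aware metric could pool the six PEA scores; the saliency weighting and the linear fusion below are assumptions for illustration only, not the published SSTAM definition.

```python
import numpy as np

def sstam_like_score(artifact_maps, saliency, weights=None):
    """artifact_maps: dict of six (H, W) per-pixel artifact strengths
    (blurring, blocking, bleeding, ringing, flickering, floating);
    saliency: (H, W) visual saliency in [0, 1]."""
    weights = weights or {k: 1.0 for k in artifact_maps}
    sal = saliency / (saliency.sum() + 1e-8)   # salient regions count more
    return sum(weights[k] * (m * sal).sum() for k, m in artifact_maps.items())

maps = {k: np.random.rand(36, 64) for k in
        ("blurring", "blocking", "bleeding", "ringing",
         "flickering", "floating")}
sal = np.random.rand(36, 64)
print(sstam_like_score(maps, sal))   # higher = more visible artifacts
```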
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced by 671 participants and annotated by more than 20 annotators throughout three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER respectively on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
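One standard remedy for the class-imbalance problem the authors study, shown as a sketch (not necessarily their chosen solution): inverse-frequency sample weights fed to PyTorch's WeightedRandomSampler, so rare micro-expression classes are drawn about as often as common ones. The label list is toy data.

```python
import torch
from torch.utils.data import WeightedRandomSampler
from collections import Counter

labels = ["happy", "surprise", "disgust", "disgust", "disgust", "fear"]
freq = Counter(labels)
weights = torch.tensor([1.0 / freq[l] for l in labels])   # rare classes upweighted
sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                replacement=True)
# pass `sampler=sampler` to the DataLoader over the ME clips
```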
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by combining a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
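A sketch of aspect (2) under assumed degradations: a positive pair is a face crop and a quality-degraded copy of itself (down-/up-sampling plus noise, mimicking surveillance capture), so a contrastive objective can learn features invariant to image quality. The scale factor and noise level are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def degrade(x, scale=0.25, noise_std=0.05):
    """x: (B, C, H, W) in [0, 1] -> low-quality positive view."""
    h, w = x.shape[-2:]
    low = F.interpolate(x, scale_factor=scale, mode="bilinear",
                        align_corners=False)       # lose detail
    low = F.interpolate(low, size=(h, w), mode="bilinear",
                        align_corners=False)       # restore size, keep blur
    return (low + noise_std * torch.randn_like(low)).clamp(0, 1)

faces = torch.rand(8, 3, 112, 112)
views = degrade(faces)   # (faces, views) form quality-variant positive pairs
```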
The interview is regarded as one of the most crucial steps in recruitment. To fully prepare for the interview with the recruiters, job seekers usually practice with mock interviews between each other. However, such a mock interview with peers is generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to have online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from the online interview data and provides mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
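A structural sketch of the disentangling idea, with hypothetical module names and sizes (the paper's actual architecture is not reproduced here): a small knowledge-selector head is the only part that must learn from the scarce interview dialogs, while the large dialog generator can be pre-trained on plentiful ungrounded dialogs and resume data.

```python
import torch.nn as nn

class MockInterviewer(nn.Module):
    def __init__(self, d=256, vocab=30000):
        super().__init__()
        # small head: needs real interview dialogs to learn what to ask about
        self.knowledge_selector = nn.Linear(d, d)
        # large generator: trainable on ungrounded dialogs + resume data
        self.dialog_generator = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True),
            num_layers=6)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, dialog_states, resume_memory):
        """dialog_states: (B, T, d) turn encodings; resume_memory: (B, S, d)."""
        picked = self.knowledge_selector(resume_memory)  # select knowledge
        h = self.dialog_generator(dialog_states, picked) # ground the reply
        return self.lm_head(h)                           # next-turn token logits
```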